
    Toward privacy-aware federated analytics of cohorts for smart mobility

    Location-based Behavioral Analytics (LBA) holds great potential for improving the services available in smart cities. Naively implemented, such an approach would track the movements of every citizen and share their location traces with the various smart service providers, similar to today's Web analytics systems that track visitors across the websites they visit. This study presents a novel privacy-aware approach to location-based federated analytics that removes the need for individuals to share their location traces with a central server. The general approach is to model the behavior of cohorts instead of modeling specific users. Using a federated approach, location data is processed locally on user devices and only shared in anonymized fashion with a server. The server aggregates the data using Secure Multiparty Computation (SMPC) into service-defined cohorts, whose data is then used to provide cohort analytics (e.g., demographics) for the various smart service providers. The approach was evaluated on three real-life datasets with varying dropout rates, i.e., clients not being able to participate in the SMPC rounds. The results show that our approach can privately estimate various cohort demographics (e.g., percentages of male and female visitors) with an error between 0 and 8 percentage points relative to the actual cohort percentages. Furthermore, we experimented with predictive models for estimating these cohort percentages one week ahead. Across all three datasets, the best-performing predictive model achieved a Pearson's correlation coefficient above 0.8 (strong correlation) and a Mean Absolute Error (MAE) between 0 and 10 (where 0 is the minimum and 100 is the maximum). We conclude that privacy-aware LBA can be achieved using existing mobile technologies and federated analytics.
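The SMPC aggregation step can be illustrated with additive secret sharing: each client splits its private cohort value into random shares that reveal nothing individually and recover the value only when summed. This is a minimal sketch of the idea, not the paper's actual protocol; in practice the shares would be distributed across independent parties, and all names here are illustrative:

```python
import random

def make_shares(value, n_shares, modulus=2**31):
    """Split a private value into additive shares that sum to value mod modulus."""
    shares = [random.randrange(modulus) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def aggregate(all_shares, modulus=2**31):
    """Sum all received shares; only the cohort total is recovered."""
    return sum(s for shares in all_shares for s in shares) % modulus

# Three clients each report cohort membership (1 or 0) as random-looking shares.
reports = [1, 0, 1]
total = aggregate([make_shares(v, n_shares=3) for v in reports])
print(total)  # 2: the cohort count, without any single share revealing a report
```

Dropout, as evaluated in the study, corresponds to some clients failing to deliver their shares in a round; a deployable protocol must recover from that, which this sketch does not attempt.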

    Handling Missing Data For Sleep Monitoring Systems

    Sensor-based sleep monitoring systems can be used to track sleep behavior on a daily basis and provide feedback to their users to promote health and well-being. Such systems can provide data visualizations to enable self-reflection on sleep habits or a sleep coaching service to improve sleep quality. To provide useful feedback, sleep monitoring systems must be able to recognize whether an individual is sleeping or awake. Existing approaches to infer sleep-wake phases, however, typically assume continuous streams of data to be available at inference time. In real-world settings, though, data streams or data samples may be missing, causing severe performance degradation of models trained on complete data streams. In this paper, we investigate the impact of missing data on recognizing sleep and wake, and use regression- and interpolation-based imputation strategies to mitigate the errors that might be caused by incomplete data. To evaluate our approach, we use a dataset that includes physiological traces (collected using wristbands), behavioral data (gathered using smartphones), and self-reports from 16 participants over 30 days. Our results show that the presence of missing sensor data degrades the balanced accuracy of the classifier on average by 10-35 percentage points for detecting sleep and wake, depending on the missing data rate. The imputation strategies explored in this work increase the performance of the classifier by 4-30 percentage points. These results open up new opportunities to improve the robustness of sleep monitoring systems against missing data.
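As a rough illustration of the interpolation family of imputation strategies mentioned above, gaps in a 1-D sensor stream can be filled by linear interpolation between the surrounding valid samples. A minimal sketch, not the paper's implementation:

```python
import numpy as np

def interpolate_missing(signal):
    """Fill NaN gaps in a 1-D sensor stream by linear interpolation."""
    signal = np.asarray(signal, dtype=float)
    missing = np.isnan(signal)
    idx = np.arange(len(signal))
    signal[missing] = np.interp(idx[missing], idx[~missing], signal[~missing])
    return signal

# Hypothetical heart-rate trace with two dropped samples:
hr = [62.0, np.nan, np.nan, 68.0, 70.0]
print(interpolate_missing(hr))  # [62. 64. 66. 68. 70.]
```

Regression-based imputation would instead predict the missing samples from other, co-recorded modalities, which typically helps more when gaps are long.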

    Datasets for Cognitive Load Inference Using Wearable Sensors and Psychological Traits

    This study introduces two datasets for multimodal research on cognitive load inference and personality traits. Unlike other datasets in Affective Computing, which disregard participants’ personality traits or focus only on emotions, stress, or cognitive load from one specific task, the participants in our experiments performed seven different tasks in total. In the first dataset, 23 participants played a game of varying difficulty (easy, medium, and hard) on a smartphone. In the second dataset, 23 participants performed six psychological tasks on a PC, again with varying difficulty. In both experiments, the participants filled in personality trait questionnaires and marked their perceived cognitive load using NASA-TLX after each task. Additionally, the participants’ physiological response was recorded using a wrist device measuring heart rate, beat-to-beat intervals, galvanic skin response, skin temperature, and three-axis acceleration. The datasets allow multimodal study of the physiological responses of individuals in relation to their personality and cognitive load. Various analyses of relationships between personality traits, subjective cognitive load (i.e., NASA-TLX), and objective cognitive load (i.e., task difficulty) are presented. Additionally, baseline machine learning models for recognizing task difficulty are presented, including a multitask learning (MTL) neural network that outperforms a single-task neural network by simultaneously learning from the two datasets. The datasets are publicly available to advance the field of cognitive load inference using commercially available devices.
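The subjective cognitive load measure used above can be illustrated with the raw (unweighted) NASA-TLX score, which averages the six subscale ratings; a minimal sketch for illustration, with made-up ratings:

```python
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw (unweighted) NASA-TLX: mean of the six subscale ratings (0-100)."""
    ratings = [mental, physical, temporal, performance, effort, frustration]
    assert all(0 <= r <= 100 for r in ratings), "ratings must be on a 0-100 scale"
    return sum(ratings) / len(ratings)

# One participant's hypothetical ratings after a hard task:
score = raw_tlx(80, 20, 70, 60, 75, 55)
print(score)  # 60.0
```

The full NASA-TLX procedure additionally weights the subscales by pairwise comparisons; the raw variant shown here is a common simplification.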

    How Accurately Can Your Wrist Device Recognize Daily Activities and Detect Falls?

    Although wearable accelerometers can successfully recognize activities and detect falls, their adoption in real life is low because users do not want to wear additional devices. A possible solution is an accelerometer inside a wrist device/smartwatch. However, wrist placement might perform poorly in terms of accuracy due to frequent random movements of the hand. In this paper, we perform a thorough, large-scale evaluation of methods for activity recognition and fall detection on four datasets. On the first two, we showed that the left wrist performs better than the dominant right one, and also better than the elbow and the chest, but worse than the ankle, knee, and belt. On the third (Opportunity) dataset, our method outperformed the related work, indicating that our feature preprocessing creates better input data. Finally, on a real-life unlabeled dataset, the recognized activities captured the subject’s daily rhythm and activities. Our fall-detection method detected all of the fast falls and minimized the false positives, achieving 85% accuracy on the first dataset. Because the other datasets did not contain fall events, only false positives were evaluated, resulting in 9 for the second, 1 for the third, and 15 for the real-life dataset (57 days of data).
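A common heuristic for detecting fast falls from accelerometer data, an impact spike followed by near-stillness, can be sketched as follows. The thresholds and names are illustrative assumptions, not the method evaluated in the paper:

```python
import math

def detect_fall(accel_xyz, impact_g=2.5, still_g=1.1, window=5):
    """Flag a fall when an acceleration-magnitude spike above impact_g is
    followed by near-stillness (magnitude around 1 g) for the next samples."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_xyz]
    for i, m in enumerate(mags):
        if m >= impact_g:
            after = mags[i + 1 : i + 1 + window]
            if after and all(a <= still_g for a in after):
                return True
    return False

# Synthetic traces in g units: ordinary walking vs. an impact-then-still fall.
walking = [(0.1, 0.2, 1.0), (0.3, 0.1, 1.1), (0.2, 0.2, 0.9)]
fall = [(0.1, 0.1, 1.0), (2.0, 2.0, 1.5), (0.0, 0.1, 1.0), (0.0, 0.0, 1.0)]
print(detect_fall(walking), detect_fall(fall))  # False True
```

The stillness check after the spike is what suppresses false positives from vigorous hand movements, the main difficulty of wrist placement noted above.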

    Breathing Rate Estimation from Head-Worn Photoplethysmography Sensor Data Using Machine Learning

    Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Given that the monitoring of heart activity is less complex than the monitoring of breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside of laboratory conditions is still a challenge. The challenge is even greater when new wearable devices with novel sensor placements are being used. In this paper, we present a novel algorithm for breathing rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual reality mask equipped with a PPG sensor placed on the forehead of a subject. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality assessment and motion artifact removal procedure. The proposed algorithm is evaluated and compared to existing approaches from the related work using two separate datasets containing data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson’s correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible based on PPG data acquired from a head-worn device.
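A simple frequency-domain baseline for this task picks the dominant spectral peak of the respiration-modulated signal within a plausible breathing band. This illustrates the general idea only, not the paper's algorithm, which adds quality assessment, artifact removal, and machine learning on top:

```python
import numpy as np

def breathing_rate_fft(signal, fs, low_bpm=6, high_bpm=30):
    """Estimate breathing rate (breaths/min) as the strongest FFT peak
    inside the respiratory frequency band."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= low_bpm / 60.0) & (freqs <= high_bpm / 60.0)
    return freqs[band][np.argmax(power[band])] * 60.0

fs = 8.0                                  # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1.0 / fs)            # one minute of data
synthetic = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz -> 15 breaths/min
print(breathing_rate_fft(synthetic, fs))  # 15.0
```

On clean periodic signals this works well; motion artifacts in real PPG are exactly why the paper's quality-assessment step matters.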

    BAG-DSM: A Method for Generating Alternatives for Hierarchical Multi-Attribute Decision Models Using Bayesian Optimization

    Funding Information: This work was partially funded by the Slovenian Research Agency (ARRS) under research core funding Knowledge Technologies No. P2-0103 (B), and by the Slovenian Ministry of Education, Science and Sport (funding agreement No. C3330-17-529020). Publisher Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
    Multi-attribute decision analysis is an approach to decision support in which decision alternatives are evaluated by multi-criteria models. An advanced feature of decision support models is the possibility to search for new alternatives that satisfy certain conditions. This task is important for practical decision support; however, the related work on generating alternatives for qualitative multi-attribute decision models is quite scarce. In this paper, we introduce the Bayesian Alternative Generator for Decision Support Models (BAG-DSM), a method to address the problem of generating alternatives. More specifically, given a multi-attribute hierarchical model and an alternative representing the initial state, the goal is to generate alternatives that demand the least change in the provided alternative to obtain a desirable outcome. The brute-force approach has exponential time complexity and prohibitively long execution times, even for moderately sized models. BAG-DSM avoids these problems by using a Bayesian optimization approach adapted to qualitative DEX models. BAG-DSM was extensively evaluated and compared to a baseline method on 43 different DEX decision models of varying complexity, e.g., different depth and attribute importance. The comparison was performed with respect to the time to obtain the first appropriate alternative, the number of generated alternatives, and the number of attribute changes required to reach the generated alternatives. BAG-DSM outperforms the baseline in all of the experiments by a large margin. Additionally, the evaluation confirms BAG-DSM’s suitability for the task, i.e., on average, it generates at least one appropriate alternative within two seconds. The relation between the depth of the multi-attribute hierarchical models, a parameter that increases the search space exponentially, and the time to obtain the first appropriate alternative was linear rather than exponential, empirically confirming BAG-DSM’s scalability.
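The exponential brute-force baseline that BAG-DSM improves on can be sketched as a search over attribute changes, trying one change before two, and so on, so the first hits are the minimal-change alternatives. The toy qualitative model and attribute names below are hypothetical, not a DEX model from the paper:

```python
from itertools import combinations, product

def generate_alternatives(model, initial, domains, target):
    """Brute-force: return the alternatives that reach `target` with the
    fewest attribute changes (this is the exponential baseline; BAG-DSM
    replaces it with Bayesian optimization)."""
    attrs = list(domains)
    for k in range(1, len(attrs) + 1):        # try 1 change, then 2, ...
        found = []
        for changed in combinations(attrs, k):
            for values in product(*(domains[a] for a in changed)):
                alt = dict(initial, **dict(zip(changed, values)))
                if alt != initial and model(alt) == target:
                    found.append(alt)
        if found:
            return found
    return []

# Hypothetical two-attribute qualitative model: "good" only if cheap and high quality.
domains = {"price": ["low", "high"], "quality": ["low", "high"]}
model = lambda a: "good" if a["price"] == "low" and a["quality"] == "high" else "bad"
initial = {"price": "high", "quality": "high"}
print(generate_alternatives(model, initial, domains, "good"))
# [{'price': 'low', 'quality': 'high'}]  -> one change suffices
```

With n attributes of m values each, the loop enumerates up to m^n alternatives, which is the exponential blow-up motivating the Bayesian approach.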

    Towards smart glasses for facial expression recognition using OMG and machine learning

    This study aimed to evaluate the use of novel optomyography (OMG) based smart glasses, OCOsense, for the monitoring and recognition of facial expressions. Experiments were conducted on data gathered from 27 young adult participants, who performed facial expressions varying in intensity, duration, and head movement. The facial expressions included smiling, frowning, raising the eyebrows, and squeezing the eyes. The statistical analysis demonstrated that: (i) OCO sensors based on the principles of OMG can capture distinct variations in cheek and brow movements with a high degree of accuracy and specificity; (ii) head movement does not have a significant impact on how well these facial expressions are detected. The collected data were also used to train a machine learning model to recognise the four facial expressions and when the face enters a neutral state. We evaluated this model in conditions intended to simulate real-world use, including variations in expression intensity, head movement, and glasses position relative to the face. The model demonstrated an overall accuracy of 93% (0.90 F1-score), evaluated using a leave-one-subject-out cross-validation technique.
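The leave-one-subject-out evaluation protocol used above can be sketched as follows: each subject's data is held out in turn, a model is trained on everyone else, and the per-subject accuracies are averaged. The nearest-centroid classifier and synthetic features are illustrative stand-ins, not the paper's model:

```python
import numpy as np

def fit(X, y):
    """Toy nearest-centroid classifier: one centroid per expression class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = list(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(dists, axis=0)]

def loso_accuracy(features, labels, subjects):
    """Leave-one-subject-out CV: hold out each subject, train on the rest,
    and average the per-subject accuracies."""
    accs = []
    for s in np.unique(subjects):
        held_out = subjects == s
        model = fit(features[~held_out], labels[~held_out])
        correct = predict(model, features[held_out]) == labels[held_out]
        accs.append(np.mean(correct))
    return float(np.mean(accs))

# Synthetic, well-separated "expression features" for two subjects.
X = np.array([[0, 0], [0, 1], [5, 5], [5, 6],
              [1, 0], [0, 0], [6, 5], [5, 5]], dtype=float)
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])        # expression class labels
subjects = np.array([1, 1, 1, 1, 2, 2, 2, 2])  # which subject produced each row
print(loso_accuracy(X, y, subjects))  # 1.0 on this separable toy data
```

Splitting by subject rather than by sample is what makes the reported 93% an estimate of performance on unseen wearers.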